Search Results: "julian"

7 March 2020

Julian Andres Klode: APT 2.0 released

After brewing in experimental for a while, and getting a first outing in the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable. (1.10 would have been a boring, weird number, eh?) Compared to the 1.8 series, the APT 2.0 series brings several new features, as well as improvements in performance and hardening. A lot of code has been removed as well, reducing the size of the library.

Highlighted Changes Since 1.8

New Features
  • Commands accepting package names now accept aptitude-style patterns. The syntax of patterns is mostly a subset of aptitude's; see apt-patterns(7) for more details, and the example after this list.
  • apt(8) now waits for the dpkg locks: indefinitely when connected to a tty, or for 120s otherwise.
  • When apt cannot acquire the lock, it prints the name and pid of the process that currently holds the lock.
  • A new satisfy command has been added to apt(8) and apt-get(8) (see the example after this list).
  • Pins can now be specified by source package, by prepending src: to the name of the package, e.g.:
    Package: src:apt
    Pin: version 2.0.0
    Pin-Priority: 990
    
    This will pin all binaries of the native architecture produced by the source package apt to version 2.0.0. To pin packages across all architectures, append :any.
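
For illustration, here are two hedged invocations of the features above (package names foo, bar, and baz are hypothetical; apt-patterns(7) and apt-get(8) are authoritative):

    # Remove everything that was automatically installed and is no longer needed
    apt remove '?garbage'

    # Satisfy a dependency string, as a build script might
    apt satisfy 'foo (>= 1.0), bar | baz'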

Performance
  • APT now uses libgcrypt for hashing instead of embedded reference implementations of MD5, SHA1, and SHA2 hash families.
  • Distribution of rred and decompression work during update has been improved to take into account the backlog instead of randomly assigning a worker, which should yield higher parallelization.

Incompatibilities
  • The apt(8) command no longer accepts regular expressions or wildcards as package arguments, use patterns (see New Features).

Hardening
  • Credentials specified in auth.conf now only apply to HTTPS sources, preventing malicious actors from reading credentials after redirecting users from an HTTP source to an HTTP URL matching the credentials in auth.conf. Another protocol can be specified explicitly; see apt_auth.conf(5) for the syntax, and the example below.
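
A minimal sketch of apt_auth.conf entries (host name and credentials are invented); per the netrc-style syntax, prefixing the machine token with a protocol is how you would opt a source back in to plain HTTP:

    machine example.org login apt password hunter2
    machine http://legacy.example.org/debian login apt password hunter2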

Developer changes
  • A more extensible cache format, allowing us to add new fields without breaking the ABI.
  • All code marked as deprecated in 1.8 has been removed.
  • Implementations of CRC16, MD5, SHA1, and SHA2 have been removed.
  • The apt-inst library has been merged into the apt-pkg library.
  • apt-pkg can now be found by pkg-config (see the example after this list).
  • The apt-pkg library now compiles with hidden visibility by default.
  • Pointers inside the cache are now statically typed. They cannot be compared against integers (except 0 via nullptr) anymore.
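
For instance, a hedged compile line for a program linking against libapt-pkg (file names hypothetical):

    g++ -o cache-demo cache-demo.cc $(pkg-config --cflags --libs apt-pkg)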

python-apt 2.0: python-apt 2.0 is not yet ready; I'm hoping to add a new, cleaner API for cache access before making the jump from 1.9 to 2.0 versioning.

libept 1.2: I've moved the maintenance of libept to the APT team. We need to investigate how to EOL this properly and provide facilities inside APT itself to replace it. There are no plans to provide new features, only bugfixes / rebuilds for new apt versions.

7 November 2017

Reproducible builds folks: Reproducible Builds: Weekly report #132

Here's what happened in the Reproducible Builds effort between Sunday October 29 and Saturday November 4 2017:

Past events

Upcoming events

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages
7 package reviews have been added, 43 have been updated and 47 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:

Documentation updates

diffoscope development
Version 88 was uploaded to unstable by Mattia Rizzolo. It included contributions (already covered by posts of the previous weeks) from:

strip-nondeterminism development
Version 0.040-1 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks, as well as new ones from:
Version 0.5.2-2 was uploaded to unstable by Holger Levsen. It included contributions already covered by posts of the previous weeks, as well as new ones from:

reprotest development

buildinfo.debian.net development

tests.reproducible-builds.org

Misc.
This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

23 October 2017

Julian Andres Klode: APT 1.6 alpha 1 seccomp and more

I just uploaded APT 1.6 alpha 1, introducing a very scary thing: seccomp sandboxing for methods, the programs downloading files from the internet and decompressing or compressing stuff. With seccomp I reduced the number of system calls these methods can use from 430 to 149. Specifically, we excluded most ways of IPC, xattrs, and most importantly, the ability for methods to clone(2), fork(2), or execve(2) (or execveat(2)). Yes, that's right: methods can no longer execute programs.

This was a real problem, because the http method did in fact execute programs: there is this small option called ProxyAutoDetect or Proxy-Auto-Detect where you can specify a script to run for a URL, and the script outputs a (list of) proxies. In order to be able to seccomp the http method, I moved the invocation of the script to the parent process. The parent process now executes the script within the sandbox user, but without seccomp (obviously); see the sketch below.

I tested the code on amd64, ppc64el, s390x, arm64, mipsel, i386, and armhf. I hope it works on all other architectures libseccomp is currently built for in Debian, but I did not check that, so your apt might be broken now if you use powerpc, powerpcspe, armel, mips, mips64el, hppa, or x32 (I don't think you can even really use x32).

Also, apt-transport-https is gone for good now. When installing the new apt release, any installed apt-transport-https package is removed (apt breaks apt-transport-https now, but it also provides it versioned, so any dependencies should still be satisfiable).

David also did a few cool bug fixes again, finally teaching apt-key to ignore unsupported GPG key files instead of causing weird errors.
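
A minimal sketch of how such a proxy auto-detection hook can be wired up (script path and output are illustrative; apt.conf(5) documents the real option):

    // /etc/apt/apt.conf.d/50proxydetect (illustrative)
    Acquire::http::Proxy-Auto-Detect "/usr/local/bin/apt-proxy-detect";

    #!/bin/sh
    # /usr/local/bin/apt-proxy-detect: receives the URL as $1 and prints
    # the proxy to use, or DIRECT for none.
    echo DIRECT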
Filed under: Uncategorized

30 September 2017

Chris Lamb: Free software activities in September 2017

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced either maliciously or accidentally during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised. I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area. This month I:
  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)


I also made the following changes to our tooling:
reproducible-check

reproducible-check is our script to determine which of the packages actually installed on your system are reproducible and which are not.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]
I also blogged about this utility. [...]
diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in setup.py. [...]
    • Compare types with identity not equality. [...] [...]
    • Use logging.py's lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.



Debian
My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

Lintian
I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers. I also blogged specifically about the Lintian 2.5.54 release.

Patches contributed
  • debconf: Please add a context manager to debconf.py. (#877096)
  • nm.debian.org: Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)
I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.

Debian LTS

This month I have been paid to work 15 hours on Debian Long Term Support (LTS). In that time I did the following:
  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.

Uploads
  • python-django:
    • 1.11.5-1 New upstream security release. (#874415)
    • 1.11.5-2 Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 New upstream release.
    • 4.0.2-2 Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 Upload to stretch-backports.
  • aptfs (0.11.0-1) New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) New upstream release.
  • docbook-to-man (1:2.0.0-39) Tighten autopkgtests and enable testing via travis.debian.net.
  • python-daiquiri (1.3.0-1) New upstream release.

I also made the following non-maintainer uploads (NMUs):

Debian bugs filed
  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • bugs.debian.org: Please explicitly link to packages.debian.org & tracker.debian.org. (#876746)
  • Requests for packaging:
    • selfspy: log everything you do on the computer. (#873955)
    • shoogle: use the Google API from the shell. (#873916)

FTP Team

As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston. I additionally filed 4 RC bugs for incomplete debian/copyright files against: comptext, comptty, ldc & python-oslo.concurrency.

24 September 2017

Julian Andres Klode: APT 1.5 is out

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months since the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was both the stretch release series and the zesty release series, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for autodetect proxy scripts that return http, https, and socks5h proxies for both http and https.

Unattended updates and upgrades now work better: the dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for the network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether update ran before or after the network was back up again). This also fixes a boot performance regression for systems with rc.local files: the rc.local.service unit specified After=network-online.target, login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot. An earlier, less intrusive variant of that fix is in 1.4.8: it just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service, so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I'd have thought the machine had hung up and force rebooted it after 5 seconds already. (This patch is also in 1.4.8.)

We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again).

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices, which was accidentally broken for 4 years: it was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1; we now link against it normally. Furthermore, if certain information in Release files changes, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily: apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify "always" to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms); see the hedged example below.
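A minimal sketch of what such interval settings can look like in a configuration fragment (file name and values illustrative; the APT::Periodic options are the knobs read by the apt-daily scripts):

    // /etc/apt/apt.conf.d/10periodic (illustrative)
    APT::Periodic::Update-Package-Lists "12h";
    APT::Periodic::Unattended-Upgrade "always";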
Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small cleanups in there, but I don't expect any life-changing changes for now. I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: these labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs. Also, we now have 3 active stable series: the 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky being shared by stretch and zesty right now (but zesty is history soon, so…).
Filed under: Debian, Ubuntu

19 September 2017

Reproducible builds folks: Reproducible Builds: Weekly report #125

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Reproducibility work in Debian
devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script, written by Chris Lamb, to report on the reproducibility status of installed packages. #876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.
Bugs filed:
Non-maintainer uploads:

Reproducibility work in other projects
Patches sent upstream:

Reviews of unreproducible packages
16 package reviews have been added, 99 have been updated and 92 have been removed in this week, adding to our knowledge about identified issues. 1 issue type has been updated:

diffoscope development

reprotest development

trydiffoscope development
Version 65 was uploaded to unstable by Chris Lamb including these contributions:

Reproducible websites development

tests.reproducible-builds.org

Misc.
This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 September 2017

Reproducible builds folks: Reproducible Builds: Weekly report #124

Here's what happened in the Reproducible Builds effort between Sunday September 3 and Saturday September 9 2017:

Media coverage

GSoC and Outreachy updates
Debian will participate in this year's Outreachy initiative, and the Reproducible Builds project is soliciting mentors and students to join this round. For more background please see the following mailing list posts: 1, 2 & 3.

Reproducibility work in Debian
In addition, the following NMUs were accepted:

Reproducibility work in other projects
Patches sent upstream:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages
3 package reviews have been added, 2 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development
Development continued in git, including the following contributions: Mattia Rizzolo also uploaded version 86, released last week, to stretch-backports.

reprotest development

tests.reproducible-builds.org

Misc.
This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

17 August 2017

Julian Andres Klode: Why TUF does not shine (for APT repositories)

At DebConf17 there was a talk about The Update Framework, TUF for short. TUF claims to be a plug-in solution to software updates, but while it has the same practical level of security as apt, it also has the same shortcomings, including no way to effectively revoke keys.

TUF divides signing responsibilities into roles: a root role, a targets role (signing stuff to download), a snapshot role (signing metadata), and a timestamp role (signing a timestamp file). There is also a mirror role for signing a list of mirrors, but we can ignore that for now. It strongly recommends that all keys except for timestamp and mirrors are kept offline, which is not applicable for APT repositories: Ubuntu updates the repository every 30 minutes; imagine doing that with offline keys. An insane proposal.

In APT repositories, we effectively only have a snapshot role: the only thing we sign are Release files, and trust is then chained down by hashes (Release files carry hashes of Packages index files, and those carry hashes of individual packages). The keys used to sign repositories are online keys; after all, all the metadata files change every 30 minutes (Ubuntu) or 6 hours (Debian), so it's impossible to sign them by hand. The timestamp role is replaced by a field in the Release file specifying until when the Release file is considered valid (see the illustrative snippet below).

Let's check the attacks TUF protects against: as we can see, APT addresses all attacks TUF addresses. But both do not handle key revocation. So, if a key and mirror get compromised (or just the key, while the mirror is MITMed), we cannot inform the user that the key has been compromised and block updates from the compromised repository.

I just wrote up a proposal to allow APT to query for revoked keys from a different host, with a key revocation list (KRL) file that is signed by different keys than the repository. This would solve the problem of key revocation easily: even if the repository host is MITMed or compromised, we can still revoke the keys signing the repository from a different location.
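For illustration, the relevant headers of a Release file might look like this (values invented; the Valid-Until field is what stands in for TUF's timestamp role):

    Origin: Debian
    Suite: stable
    Codename: stretch
    Date: Thu, 17 Aug 2017 08:00:00 UTC
    Valid-Until: Thu, 24 Aug 2017 08:00:00 UTC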
Filed under: Debian, Ubuntu

14 August 2017

Reproducible builds folks: Reproducible Builds: Weekly report #119

Here's what happened in the Reproducible Builds effort between Sunday July 30 and Saturday August 5 2017:

Media coverage
We were mentioned on Late Night Linux Episode 17, around 29:30.

Packages reviewed and fixed, and bugs filed
Upstream packages:
Debian packages:

Reviews of unreproducible packages
29 package reviews have been added, 72 have been updated and 151 have been removed in this week, adding to our knowledge about identified issues. 4 issue types have been updated:

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development
Version 85 was uploaded to unstable by Mattia Rizzolo. It included contributions from: as well as previous weeks' contributions, summarised in the changelog. There were also further commits in git, which will be released in a later version:

Misc.
This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

1 August 2017

Reproducible builds folks: Reproducible Builds: Weekly report #118

Here's what happened in the Reproducible Builds effort between Sunday July 23 and Saturday July 29 2017:

Toolchain development and fixes

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages
4 package reviews have been added, 2 have been updated and 24 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development

Misc.
This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

25 July 2017

Reproducible builds folks: Reproducible Builds: week 117 in Buster cycle

Here's what happened in the Reproducible Builds effort between Sunday July 16 and Saturday July 22 2017:

Toolchain development
Bernhard M. Wiedemann wrote a tool to automatically run through different sources of non-determinism, and report which of these caused irreproducibility. Dan Kegel's patches to fpm were merged.

Bugs filed
Patches submitted upstream:
Patches filed in Debian:

Reviews of unreproducible packages
73 package reviews have been added, 44 have been updated and 50 have been removed in this week, adding to our knowledge about identified issues. No issue types were updated.

Weekly QA work
During our reproducibility testing, FTBFS bugs have been detected and reported by:

diffoscope development

reprotest development
Ximin also restarted the discussion with autopkgtest-devel about code reuse for reprotest. Santiago Torres began a series of patches to make reprotest more distro-agnostic, with the aim of making it usable on Arch Linux. Ximin reviewed these patches.

Misc.
This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

24 June 2017

Ingo Juergensmann: Upgrade to Debian Stretch - GlusterFS fails to mount

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch I discovered that glusterfs-client was unable to mount the storage on Jessie servers. I got this in the glusterfs log:
[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/letsencrypt.sh/certs'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
After some searching on the Internet I found Debian #858495, but no solution for my problem. Some search results recommended setting "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:
JoeJulian ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).
Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages to Stretch as well, this "workaround" didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:
We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.
Setting this option instantly fixed my issues with mounting glusterfs storages. So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 and IPv6: when IPv6 is disabled in glusterfs, it works. I added this information to #858495.
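For reference, assuming the volume name from the --volfile-id in the log above (le), the suggested command would be run on the Gluster server like so:

    gluster volume set le transport.address-family inet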
Kategorie:

15 May 2017

Gunnar Wolf: Starting a project on private and anonymous network usage

I am starting work with the students of LIDSOL (Laboratorio de Investigación y Desarrollo de Software Libre, Free Software Research and Development Laboratory) of the Engineering Faculty of UNAM:

We want to dig into the technical and social implications of mechanisms that provide for anonymous, private usage of the network. We will have our first formal work session this Wednesday, for which we have invited several interesting people to join the discussion and help provide a path for our oncoming work. Our invited and confirmed guests are, in alphabetical order:

22 February 2017

Antoine Beaupré: The case against password hashers

In previous articles, we have looked at how to generate passwords and reviewed various password managers. There is, however, a third way of managing passwords other than remembering them or encrypting them in a "vault", which is what I call "password hashing". A password hasher generates site-specific passwords from a single master password using a cryptographic hash function. It thus allows a user to have a unique and secure password for every site they use while requiring no storage; they need only remember a single password. You may know these as "deterministic or stateless password managers" but I find the "password manager" phrase to be confusing because a hasher doesn't actually store any passwords. I do not think password hashers represent a good security tradeoff, so I generally do not recommend their use, unless you really do not have access to reliable storage that you can access readily. In this article, I use the word "password" for a random string used to unlock things, but "token" to represent a generated random string that the user doesn't need to remember. The input to a password hasher is a password with some site-specific context and the output from a password hasher is a token.

What is a password hasher?
A password hasher uses the master password and a label (generally the host name) to generate the site-specific password. To change the generated password, the user can modify the label, for example by appending a number. Some password hashers also have different settings to generate tokens of different lengths or compositions (symbols or not, etc.) to accommodate different site-specific password policies. The whole concept of password hashers relies on the concept of one-way cryptographic hash functions or key derivation functions that take an arbitrary input string (say a password) and generate a unique token, from which it is impossible to guess the original input string. Password hashers are generally written as JavaScript bookmarklets or browser plugins and have been around for over a decade. The biggest advantage of password hashers is that you only need to remember a single password. You do not need to carry around a password manager vault: there's no "state" (other than site-specific settings, which can be easily guessed). A password hasher named Master Password makes a compelling case against traditional password managers in its documentation:
It's as though the implicit assumptions are that everybody backs all of their stuff up to at least two different devices and backups in the cloud in at least two separate countries. Well, people don't always have perfect backups. In fact, they usually don't have any.
It goes on to argue that, when you lose your password: "You lose everything. You lose your own identity." The stateless nature of password hashers also means you do not need to use cloud services to synchronize your passwords, as there is (generally, more on that later) no state to carry around. This means, for example, that the list of accounts that you have access to is only stored in your head, and not in some online database that could be hacked without your knowledge. The downside of this is, of course, that attackers do not actually need to have access to your password hasher to start cracking it: they can try to guess your master key without ever stealing anything from you other than a single token you used to log into some random web site. Password hashers also necessarily generate unique passwords for every site you use them on. While you can also do this with password managers, it is not an enforced decision. With hashers, you get distinct and strong passwords for every site with no effort.
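
To make the mechanism concrete, here is a minimal sketch in Python of the general idea (not the algorithm of any particular tool; the PBKDF2 parameters and label scheme are illustrative):

    import base64
    import hashlib

    def site_token(master: str, label: str, counter: int = 1, length: int = 16) -> str:
        """Derive a site-specific token from a master password and a site label."""
        # The label (host name) plus a counter acts as the salt; bumping the
        # counter is how a user "changes" the generated password for one site.
        salt = f"{label}:{counter}".encode()
        raw = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 100_000)
        return base64.urlsafe_b64encode(raw).decode()[:length]

    # The same inputs always yield the same token; no state is stored anywhere.
    print(site_token("correct horse battery staple", "example.com"))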

The problem with password hashers
If hashers are so great, why would you use a password manager? Programs like LessPass and Master Password seem to have strong crypto that is well implemented, so why isn't everyone using those tools? Password hashing, as a general concept, actually has serious problems: since the hashing outputs are constantly compromised (they are sent in password forms to various possibly hostile sites), it's theoretically possible to derive the master password and then break all the generated tokens in one shot. The use of stronger key derivation functions (like PBKDF2, scrypt, or HMAC) or seeds (like a profile-specific secret) makes those attacks much harder, especially if the seed is long enough to make brute-force attacks infeasible. (Unfortunately, in the case of Password Hasher Plus, the seed is derived from Math.random() calls, which are not considered cryptographically secure.) Basically, as stated by Julian Morrison in this discussion:
A password is now ciphertext, not a block of line noise. Every time you transmit it, you are giving away potential clues of use to an attacker. [...] You only have one password for all the sites, really, underneath, and it's your secret key. If it's broken, it's now a skeleton-key [...]
Newer implementations like LessPass and Master Password fix this by using reasonable key derivation algorithms (PBKDF2 and scrypt, respectively) that are more resistant to offline cracking attacks, but who knows how long those will hold? To give a concrete example, if you would like to use the new winner of the password hashing competition (Argon2) in your password manager, you can patch the program (or wait for an update) and re-encrypt your database. With a password hasher, it's not so easy: changing the algorithm means logging in to every site you visited and changing the password. As someone who used a password hasher for a few years, I can tell you this is really impractical: you quickly end up with hundreds of passwords. The LessPass developers tried to facilitate this, but they ended up mostly giving up. Which brings us to the question of state. A lot of those tools claim to work "without a server" or as being "stateless" and while those claims are partly true, hashers are way more usable (and more secure, with profile secrets) when they do keep some sort of state. For example, Password Hasher Plus records, in your browser profile, which site you visited and which settings were used on each site, which makes it easier to comply with weird password policies. But then that state needs to be backed up and synchronized across multiple devices, which led LessPass to offer a service (which you can also self-host) to keep those settings online. At this point, a key benefit of the password hasher approach (not keeping state) just disappears and you might as well use a password manager. Another issue with password hashers is choosing the right one from the start, because changing software generally means changing the algorithm, and therefore changing passwords everywhere. If there was a well-established program recognized as a solid cryptographic solution by the community, I would feel more confident. But what I have seen is that there are a lot of different implementations, each with its own warts and flaws; because changing is so painful, I can't actually use any of those alternatives. All of the password hashers I have reviewed have severe security versus usability tradeoffs. For example, LessPass has what seems to be a sound cryptographic implementation, but using it requires you to click on the icon, fill in the fields, click generate, and then copy the password into the field, which means at least four or five actions per password. The venerable Password Hasher is much easier to use, but it makes you type the master password directly in the site's password form, so hostile sites can simply use JavaScript to sniff the master password while it is typed. While there are workarounds implemented in Password Hasher Plus (the profile-specific secret), both tools are more or less abandoned now. The Password Hasher homepage, linked from the extension page, is now a 404. Password Hasher Plus hasn't seen a release in over a year and there is no space for collaborating on the software: the homepage is simply the author's Google+ page with no information on the project. I couldn't actually find the source online and had to download the Chrome extension by hand to review the source code. Software abandonment is a serious issue for every project out there, but I would argue that it is especially severe for password hashers. Furthermore, I have had difficulty using password hashers in unified login environments like Wikipedia's or StackExchange's single-sign-on systems.
Because they allow you to log in with the same password on multiple sites, you need to choose (and remember) what label you used when signing in. Did I sign in on stackoverflow.com? Or was it stackexchange.com? Also, as mentioned in the previous article about password managers, web-based password managers have serious security flaws. Since more than a few password hashers are implemented using bookmarklets, they bring all of those serious vulnerabilities with them, which can range from account name to master password disclosures. Finally, some of the password hashers use dubious crypto primitives that were valid and interesting a decade ago, but are really showing their age now. Stanford's pwdhash uses MD5, which is considered "cryptographically broken and unsuitable for further use". We have seen partial key recovery attacks against MD5 already and while those do not allow an attacker to recover the full master password yet (especially not with HMAC-MD5), I would not recommend anyone use MD5 in anything at this point, especially if changing that algorithm later is hard. Some hashers (like Password Hasher and Password Plus) use a single round of SHA-1 to derive a token from a password; by contrast, WPA2 (standardized in 2004) uses 4096 iterations of HMAC-SHA1. A recent US National Institute of Standards and Technology (NIST) report also recommends "at least 10,000 iterations of the hash function".

Conclusion
Forced to suggest a password hasher, I would probably point to LessPass or Master Password, depending on the platform of the person asking. But, for now, I have determined that the security drawbacks of password hashers are not acceptable and I do not recommend them. It makes my password management recommendation shorter anyway: "remember a few carefully generated passwords and shove everything else in a password manager". [Many thanks to Daniel Kahn Gillmor for the thorough reviews provided for the password articles.]
Note: this article first appeared in the Linux Weekly News. Also, details of my research into password hashers are available in the password hashers history article.

14 February 2017

Julian Andres Klode: jak-linux.org moved / backing up

In the past two days, I moved my main web site jak-linux.org (and jak-software.de) from a very old contract at STRATO over to something else: the domains are registered with INWX and the hosting is handled by uberspace.de. Encryption is provided by Let's Encrypt.

I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10, and the .de domain was transferred completely at 20:36 (about 20 minutes if you count my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours since I retrieved the auth code. I think that's quite a good time.

And, for those of you who don't know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories for you to drop files in for the http server, and various tools to add subdomains, certificates, and virtual users to the mailserver. You can also run your own custom-built software and open ports in their firewall. That's quite cool.

I'm considering migrating the blog away from wordpress at some point in the future: having a more integrated experience is a bit nicer than having my web presence split over two sites. I'm unsure if I shouldn't add something like cloudflare there; I don't want to overload the servers (but I only serve static pages, so how much load is this really going to get?).

In other news: off-site backups

I also recently started doing offsite backups via borg to a server operated by the wonderful rsync.net; a hedged sketch of the setup is below. For those of you who do not know rsync.net: you basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there. The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. You just have to send them an email.

Finally, I must say that uberspace and rsync.net feel similar in spirit. Both heavily emphasise the command line, and don't really have any fancy click stuff. I like that.
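A minimal sketch of such a borg setup (user, host, and paths hypothetical):

    # One-time: create an encrypted repository on the remote host
    borg init --encryption=repokey user@host.rsync.net:backups

    # Per run: create a dated archive of the home directory
    borg create user@host.rsync.net:backups::home-2017-02-14 ~/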
Filed under: General

22 December 2016

Urvika Gola: Outreachy- Week 1,2 Progress

For the past few weeks I have been researching and working on creating a white label version of Lumicall with my mentors Daniel, Juliana and Bruno. Lumicall is a free and convenient app for making encrypted phone calls from Android. It uses the SIP protocol to interoperate with other apps and corporate telephone systems. Think of any app that you use to call others using a SIP ID. What does it mean to make a white label version of Lumicall?
White labelling is the idea of business users taking the whole or a piece of Lumicall's code and tweaking it according to their requirements and the functionality they want. The white label version would have the client's name, logo, icons, themes etc. which reflect their brand, although the underlying working of both applications would be the same. Think of it like getting a cake from the local bakery, putting it into a home baking dish and passing it off as something you made in front of your friends, earning all the praise. Except cake is more appealing than apps.
What whitelabelling does is take away the pain of collecting cake ingredients, mixing them in the right proportion and baking at the right temperature, and it uses the experience of one cake base to create ever fancier, prettier, grander cakes. This lets the cool bakers (or the business users of Lumicall) focus on other aspects of the party like decoration, games and drinks (or monetization, publicity and new features).
Now you would wonder who these cool bakers or business users of Lumicall would be. While researching this work I found that, despite the commonalities, each application needs a unique identifier in the app store, and that there is a difference between the package name and the applicationId. When I create a new project in Android Studio, the applicationId exactly matches the Java-style package name I chose during setup. However, the application ID and package name are independent of each other beyond this point. The thing to keep in mind is that app stores treat a changed application ID as a different app altogether.

So if I were to make N separate copies of the application code for N clients, it would be a maintenance nightmare. If the build system is Gradle, using product flavors is a trick that will make this maintenance easier. Instead of N separate copies, I would simply have N product flavors. Each flavor corresponds to a customized version of my application. Pro, free, whitelabel, debug (for development purposes) and release (for production purposes) are the basic flavors I have identified.

Each flavor would use the same source code and files of the application, but resources, icons, manifests etc. that are specific to each flavor can be defined again in the src/<flavor_name> directory, under res folders etc., according to the requirement.
Here is a snippet from my build.gradle file:
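(The original snippet was posted as an image and did not survive; below is a minimal reconstruction of what a productFlavors block like the one described might look like, with a hypothetical application id.)

    android {
        defaultConfig {
            applicationId "org.example.lumicall"   // hypothetical base id
        }
        productFlavors {
            free {
                applicationIdSuffix ".free"
            }
            pro {
                applicationIdSuffix ".pro"
            }
            whitelabel {
                applicationIdSuffix ".whitelabel"
            }
            vanilla {
                // empty flavor: keeps the default configuration buildable
            }
        }
    }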

Note: you have to define at least two flavors to be able to build multiple variants, because once you have flavors, you can only build flavored applications. You can't just build the default configuration anymore. Define an empty flavor, with no applicationIdSuffix at all, and it will use all of the default config section.

This week I'll be moving forward with the implementation of whitelabelling using productFlavors in Lumicall.
I would love to hear from someone who has done this before!
Comment here or email me and I promise you an excellent home-made-store-bought cake!

Wishing you all HAPPY HOLIDAYS!!
-U

Shirish Agarwal: The wine and dine at debconf16

For the Wine Connoisseur

All photos courtesy KK. If there are any deviations, I will put up labels sharing whose copyright it is. Before I get into all of that: I was curious about Canada and, taking the opportunity of debconf happening there in a few months, I asked a few people what they thought of digital payments, fees and expenses in their country, and if plastic cash is indeed used therein. The first to answer was Tyler McDonald (no idea if he is in any way related to the fast-food chain McDonalds, which is a worldwide operation). This is what he had to say/share:
You can use credit / debit cards almost everywhere. Restaurant waiters also usually have wireless credit / debit terminals that they will bring to your table for you to settle your bill. How much your bank charges depends on your Canadian bank and the banking plan you are on. For instance, on my plan through the Bank Of Montreal, I get (I think) 20 free transactions a month and then after that I'm charged $0.50CDN/piece. However, if I go to a Bank Of Montreal ATM and withdraw cash, there is no service fee for that. There is no service fee for using *credit* cards, only *debit* cards tend to have the fee. I live in a really rural area so I can't always get to a Bank Of Montreal machine for cash. So what I usually end up doing is either paying by credit and then paying off the balance right away so I don't have to pay interest, or, when I do use my bank card to pay for something, asking if I can get cash back as well. Yes, Canada converted to plastic notes a few years ago. We've also eliminated the penny. For cashless transactions, you pay the exact amount billed. If you're paying somebody in cash, it is rounded up or down to the nearest 5 cents. And for $1 or $2, instead of notes, we've moved over to coins. I personally like the plastic notes. They're smoother and feel more durable than the paper notes. I've had one go through a laundry load by accident and it came out the other side fine.
Another gentleman responded with slightly more information, which would probably interest travellers from around the world, not just Indians:
Quebec has its own interbank system called Interac (https://interac.ca/en/about/our-company.html). Quebec is a very proud and independent region and for many historical reasons they want to stand on their own, which is why they support their local systems. Some vendors will support only Interac for debit card transactions (at least this was the case when I stayed there at the beginning of this decade; it might have changed a bit). *Most* vendors (including supermarkets like Provigo, Metro, etc.) will accept major credit and debit cards, although MasterCard isn't accepted as widely there as Visa is. So, if you have both, load up your Visa card instead of your MasterCard, or get a prepaid Visa card from your bank. They support chip cards everywhere so don't worry about that. If you have a 5 digit pin on any of your cards and a vendor asks you for a 4 digit pin, it will work 90%+ of the time if you just enter the first 4 digits, but it's usually a good idea to change your pin to a 4 digit one just to be safe.
From the Indian perspective all of the above fits pretty neatly, as we also have Pin and Chip cards (domestically, though, most ATMs still use the magnetic strip, and it is suspected that the POS terminals aren't any better). That would be a whole different story, so probably left for another day. I do like the bit about pocketing the change tip. As far as the number of free transactions goes, it was pretty limited in India for a few years before the demonetization happening now. A few years before, I do remember doing as many transactions at the ATM as I pleased, but then ATMs have seen a downward spiral in terms of technology upgradation, maintenance etc. There is no penalty to the bank if the ATM is out of order; if there was a significant penalty, then we probably would have seen banks taking more care of ATMs. It is a slightly more complex topic, hence I would take a stab at it some other day. I do hope though that the terms of ATM usage for bank customers become lenient, similar to Canada; otherwise it would be difficult for Indians to jump on the digital band-wagon, as you cannot function without cheap, user-friendly technology. Cash machines: Uneven spread, slowing growth - Copyright Indian Express. The image has been taken from this fascinating article which appeared in the Indian Express a couple of days back.

Coming back to the cheese and wine in the evening: I think we started coming back from Eagle Encounters around 16:30/17:00 hrs Cape Town time. Somehow the ride back was much faster, and we played some Bollywood party music while coming back (all cool). Suddenly I remembered that I had to buy some cheese, as I hadn't brought any from India. There is quite a bit of a post where I'm trying to know/understand if spices can be smuggled (which much later I learnt I didn't need to, but that's a different story altogether); I also had off-list conversations with people about cheese as well, but wasn't able to get any good recommendations. Then I saw that KK had bought Mysore Pak (apparently she took a chance not declaring it) which, while not being exactly cheese, fit right into things. In her own words: "a South Indian ghee sweet fondly nicknamed the blocks of cholesterol and reason #3 for bypass surgery" (KK). So with Leonard's help we stopped at a place that looked like a chain of stores, each store selling something. Seeing that, I was immediately transported to Connaught Place, Delhi.

The image comes from http://planetden.com/food/roundabout-world-connaught-place-delhi which attempts to explain Connaught Place. While the article is okish, it lacks soul and is not written as a Delhite, or anybody who has spent a chunk of holidays at CP, would write it. Another day, another story, sorry. What I found interesting about the stores: while they were next to each other, I also eyed an alcohol shop as well as an adult/sex shop. I asked Leonard how far we were from UCT and he replied hardly 5 minutes by car, so I was shocked to see both an alcohol and a sex shop. While an alcohol shop some distance away from a college is understandable (there are few and far between around colleges all over India), adult shops are a rarity. Unfortunately, none of us have any photos of the place, as by that time everybody's phone was dead or just about to die, and nobody had thought to bring a portable power pack to juice our mobile devices. A part of me was curious to see what the sex shop would have and how it looked from inside but, as I was with younger people, I didn't think it was appropriate.
All of us except Jaminy and someone else (besides Leonard) decided to stay back, while the rest went inside to explore the stores. It took me some time to make my way to the cheese corner, and I had no idea which was good and which wasn't. With no idea of the brands therein, the only way to figure it out was the pricing. So I bought two: a larger, cheap 500 gm piece and a smaller, slightly more expensive one, just to make sure that the Debian cheese team would be happy. We did have a mini-adventure, as for some time Jaminy was missing; apparently she went goofing off or went to freshen up or something, and we were unable to connect with her as all our phones were dead or dying. Eventually we came back to UCT and had barely freshened up when it was decided by our group to go and give our share of goodies to the cheese and wine party. When I went up to the room to share the cheese, I came to know they needed a volunteer for cutting veggies etc. Having spent years watching Yan Can Cook and having practised quite a bit, I tried to do some fancy decoration and some julienne cutting, but as we got dull knives and not much time, I just did some plain old cutting.
The Salads, partly done by me.

I have to share that I had a fascinating discussion about cooking in pressure cookers. I was under the assumption that everybody knows how to use pressure cookers, as they are one of the simplest ways to cook food without letting go of all the nutrients. At least, I believe this to be predominant in the Asian subcontinent, and even the Chinese have similar vessels for cooking. I use what is called a first generation pressure cooker. I have been using a 1.5 l Prestige pressure cooker for over half a decade, almost daily, without issues. http://www.amazon.in/Prestige-Nakshatra-Aluminium-Pressure-Cooker/dp/B00CB7U1OU
Prestige 1.5 L Pressure Cooker

1.5 Litre Pressure Cooker with gasket and everything.

There are also induction pressure cookers in the Indian market nowadays; this model is one example: https://www.amazon.in/Prestige-Deluxe-Induction-Aluminum-Pressure/dp/B01KZVPNGE/ref=sr_1_2
Induction base cooking for basmati rice

Best cooker for doing Basmati Biryanis and things like that.

Basmati is a long-grain, aromatic rice which most families use on very special occasions such as festivals and marriages; anything good and pure is associated with the rice. I had also shared my lack of knowledge of industrial microwave ovens. While I do get most small microwave ovens like these, I simply have no clue about cooking in industrial ovens. Anyway, after that conversation I went back, freshened up a bit, and some time later found myself in the middle of this
Collection of Wine Bottles

Random selection of wine bottles from all over the world.

Also at times found myself in middle of this
Chocolates all around me.

CHOCOLATES

I tried quite a few chocolates, but the one I liked best (I don't remember the name) was a white caramel chocolate which literally melted in my mouth. I got the whole "died and went to heaven" experience. Who said gluttony is bad? Or this
French Bread, Wine and chaos

As can be seen, the French really enjoy their bread. I vaguely remember a story (I don't remember if it was a children's fairy tale or something else) about how the French won a war through their French bread. Or this
Juices for those who love their health

We also had juices for the teetotallers and those who can't handle drinks. Unsurprisingly perhaps, by the end of the session almost all the different wines were finished, while there was still some juice left to go around. From the Indian perspective it wasn't at all exciting: there were no brawls, everybody was too civilized, and everybody staggered off once they had met their quota. As I was in a holiday spirit, I stayed up late, staggered to my room, blissed out and woke up without any headache. Pro tip: drink lots and lots of water, especially if you are drinking alcohol. It flushes out most of the toxins and also helps avoid morning-after headaches. If I'm going drinking, I usually drown myself in at least a litre or two of water, even if I have to go to the bathroom a couple of times before going to bed. All in all, a perfect evening. I was able to connect and talk with some of the gods whom I had wanted to meet for a long time, and they actually listened. I don't remember if I mumbled something or made sense in small talk or whatever I did. But as shared, a perfect evening.
Filed under: Miscellaneous Tagged: #ATM usage, #Canada, #Cheese and Wine party, #Cheese shopping, #Connaught Place Delhi, #Debconf16, #Debit card, #French bread, #Julienne cutting, #Mysore Pak, #white caramel chocolate

21 December 2016

Hideki Yamane: considering package delta


From Android Developers Blog: Saving Data: Reducing the size of App Updates by 65%

We should consider providing delta packages, especially for update packages from security.debian.org, IMO.

Update:
Yes, Bálint Réczey and others pointed out via email that there's debdelta.debian.net for this purpose. But for general usage it should be integrated into the infrastructure, without any extra manual setup. Probably apt (as Julian Andres Klode said in his talk at DebConf16) and also the infrastructure (dak?) would need to be modified to implement it.

Applying deltas to daily unstable/testing updates may be hard, but security update packages from security.debian.org and stable point releases are worth the effort at least, IMO.

Some RPM distros (Fedora and openSUSE) already provide delta packages, so we can do it, too. Right? :-)
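
For illustration, here is a deliberately naive C++ sketch of the idea behind binary deltas: ship only instructions to copy byte ranges that already exist in the old package, plus the genuinely new bytes. The 'C'/'A' command format below is purely hypothetical; real tools such as bsdiff, xdelta and debdelta use far more sophisticated encodings and add compression on top.

    // Apply a toy delta: 'C' <offset> <len> copies len bytes from OLD,
    // 'A' <len> <bytes> appends len literal bytes carried in the delta.
    #include <cstdint>
    #include <fstream>
    #include <iostream>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc != 4) {
            std::cerr << "usage: applydelta OLD DELTA NEW\n";
            return 1;
        }
        std::ifstream oldf(argv[1], std::ios::binary);
        std::ifstream delta(argv[2], std::ios::binary);
        std::ofstream newf(argv[3], std::ios::binary);

        char op;
        while (delta.get(op)) {
            std::uint64_t a = 0;
            delta.read(reinterpret_cast<char*>(&a), sizeof a);
            if (op == 'C') {  // reuse bytes the user already has
                std::uint64_t len = 0;
                delta.read(reinterpret_cast<char*>(&len), sizeof len);
                std::vector<char> buf(len);
                oldf.seekg(static_cast<std::streamoff>(a));
                oldf.read(buf.data(), static_cast<std::streamsize>(len));
                newf.write(buf.data(), static_cast<std::streamsize>(len));
            } else if (op == 'A') {  // bytes that are new in this version
                std::vector<char> buf(a);
                delta.read(buf.data(), static_cast<std::streamsize>(a));
                newf.write(buf.data(), static_cast<std::streamsize>(a));
            }
        }
        return 0;
    }

Even this toy format shows where the savings come from: for a typical security update most of the package body is unchanged, so nearly all of the new file reduces to copy commands.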

25 November 2016

Julian Andres Klode: Starting the faster, more secure APT 1.4 series

We just released the first beta of APT 1.4 to Debian unstable (beta here means that we don't know of any other big stuff to add to it, but are still open to further extensions). This is the release series that will be released with Debian stretch, Ubuntu zesty, and possibly Ubuntu zesty+1 (if the Debian freeze takes a very long time, even zesty+2 is possible). It should reach the master archive in a few hours, and your mirrors shortly after that.

Security changes

APT 1.4 by default disables support for repositories signed with SHA1 keys. I announced back in January that it was my intention to do this during the summer for development releases, but I only remembered the Jan 1st deadline for stable releases supporting that (APT 1.2 and 1.3), so better late than never. Around January 1st, the same or a similar change will occur in the APT 1.2 and 1.3 series in Ubuntu 16.04 and 16.10 (subject to approval by Ubuntu's release team). This should mean that repository providers have had about one year to fix their repositories, and more than 8 months since the release of 16.04. I believe that 8 months is a reasonable time frame in which to upgrade a repository signing key, and hope that providers who have not updated their repositories yet will do so as soon as possible.

Performance work

APT 1.4 provides a 10-20% performance increase in cache generation (and according to callgrind, we went from approx 6.8 billion to 5.3 billion instructions for my laptop's configuration, a reduction of more than 21%). The major improvements are:

We switched the parsing of Deb822 files (such as Packages files) to my perfect hash function TrieHash. TrieHash, which generates C code from a set of words, is about equal to or twice as fast as the previously used hash function (and two to three times faster than gperf), and we save an additional 50% of that time as we now only have to hash once during parsing, instead of during look-up as well. APT 1.4 marks the first time TrieHash is used in any software. I hope that it will spread to dpkg and other software at a later point in time.

Another important change was to drop normalization of Description-MD5 values, the fields mapping a description in a Packages file to a translated description. We used to parse the hex digits into a native binary stream, and then converted them back to hex digits for comparisons, which cost us about 5% of the run-time performance.

We also optimized one of our hash functions, the VersionHash, which hashes the important fields of a package to recognize packages with the same version but different content, so that it does not normalize data to a temporary buffer anymore. This buffer had been the subject of some bugs (overflow, incompleteness) in the recent past, and also caused some slowdown due to the additional writes to the stack. Instead, we now pass the bytes we are interested in directly to our CRC code, one byte at a time.

There were also some other micro-optimisations: for example, the hash tables in the cache used to be ordered by standard compare (alphabetical, followed by shortest). They are now ordered by size first, meaning we can avoid data comparisons for strings of different lengths. We also got rid of a std::string that could not use the short string optimisation in a hot path of the code. Finally, we converted our case-insensitive djb hashes to not use a normal tolower_ascii(), but introduced tolower_ascii_unsafe(), which just sets the lowercase bit (0x20) in the character (a small sketch of these last two tricks follows at the end of this post).

Others

For a more complete overview of all changes, consult the changelog.
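
As promised above, here is a minimal sketch of the size-first ordering and the lowercase-bit trick, reconstructed from the description in this post rather than from APT's actual sources; the function names follow the post, but the exact signatures are assumptions.

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // Case folding via the lowercase bit: for ASCII letters, OR-ing in 0x20
    // maps 'A'..'Z' onto 'a'..'z'.
    static inline char tolower_ascii(char c) {
        return (c >= 'A' && c <= 'Z') ? static_cast<char>(c | 0x20) : c;
    }

    // The "unsafe" variant skips the range check. Non-letter bytes may change
    // too, e.g. '@' (0x40) becomes '`' (0x60); that is harmless as long as
    // every input to the hash or comparison is folded the same way.
    static inline char tolower_ascii_unsafe(char c) {
        return static_cast<char>(c | 0x20);
    }

    // Size-first ordering: compare lengths before bytes, so candidates of a
    // different length are rejected without ever touching the string data.
    static int CompareSizeFirst(const char* a, std::size_t alen,
                                const char* b, std::size_t blen) {
        if (alen != blen)
            return alen < blen ? -1 : 1;  // cheap reject, no data access
        return std::memcmp(a, b, alen);   // only equal lengths reach memcmp
    }

    int main() {
        std::printf("%c %c\n", tolower_ascii('A'), tolower_ascii_unsafe('@'));
        std::printf("%d\n", CompareSizeFirst("apt", 3, "apt-get", 7));
        return 0;
    }

The length check is where the reordering pays off: with buckets sorted by size, a look-up can skip every entry whose stored length differs before reading a single byte of string data.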
Filed under: Debian, Ubuntu

17 November 2016

Urvika Gola: Reaching out to Outreachy

The past few weeks have been a whirlwind of work, with my application process for Outreachy taking my full attention. When I got to know about Outreachy, I was intrigued as well as dubious. I had many unanswered questions in my mind. Honestly, I had that fear of failure which prevented me from submitting my partially filled application form. I kept contemplating whether the application was good enough, whether my answers were perfect and had the right balance of "du-uh" and "oh-ah"!
In moments of doubt, it is important to surround yourself with people who believe in you more than you believe in yourself. I've been fortunate enough to have two amazing people: my sister Anjali, who is an engineer at Intel, and my friend Pranav Jain, who completed his GSoC '16 with Debian.
They believed in me when I sat staring at my application and encouraged me to click that final button. When I initially applied for Outreachy, I was given a task of building Lumicall, and a subsequent task was to examine a BASH script which solves the DNS-01 challenge.
I deployed the DNS-01 challenge in Java and tested my solution against a server.
Within a limited time frame, I figured things out, wrote my solution in Java, and then eagerly waited for the results to come out, going through a full cycle of emotions. I was elated when I got to know I had been selected for Outreachy to work with Debian. I was excited about open source and found the idea of working on the project fun because of the numerous possibilities of contributing towards voice, video and chat communication software.
My project mentor, Daniel Pocock, played a pivotal role in the time after I had submitted my application. Like a true mentor, he replied to my queries promptly and guided me towards finding the solutions to problems on my own. He exemplified how to feel comfortable with developing in open source, and I felt inspired and encouraged to move along in my work. Beyond that, the MiniDebConf was when I was finally introduced to the Debian community. It was an overwhelming experience, and I felt proud to have come so far. It was pretty cool to see JitsiMeet being used for this video call. I was also introduced to two of my mentors, Juliana Louback & Bruno Magalhães. I am very excited to learn from them.

I am glad I applied for Outreachy, which helped me identify my strengths, and I am totally excited to be working with Debian on the project and to learn as much as I can throughout the period. I am not a blog person; this is my first blog ever! I would love to share my experience with you all in the hope of inspiring someone else who is afraid of clicking that final button!
